Improving Convergence of CMA-ES Through Structure-Driven Discrete Recombination
Authors

Abstract
Evolution Strategies (ES) are a class of continuous optimization algorithms that have proven to perform very well on hard optimization problems. Whereas earlier literature used both intermediate and discrete recombination operators, most modern ES, e.g. CMA-ES, use only intermediate recombination. While CMA-ES is considered state-of-the-art in continuous optimization, we believe that reintroducing discrete recombination can improve the algorithm's ability to escape local optima. Specifically, we look at using information on the problem's structure to create building blocks for recombination.

In evolutionary computation, a population of candidate solutions is evolved by applying mutation and recombination. Mutation alters elements of a single solution, while recombination combines elements from different individuals to create a new candidate solution. A typical recombination operator is the crossover operator, where a new solution is produced by essentially copying parts from different parents and gluing them together. When performed randomly, crossover can disrupt optimized substructures present in a parent by inheriting only part of a substructure into the offspring. In the domain of genetic algorithms (GAs), research into detecting and using a problem's structure to improve the performance of crossover recombination is ongoing (Harik, Lobo, and Sastry 2006; Thierens and Bosman 2011; Pelikan, Hauschild, and Thierens 2011). In Evolution Strategies (Bäck, Hoffmeister, and Schwefel 1991; Beyer and Schwefel 2002), on the other hand, research has moved away from the concept of crossover or discrete recombination. See for example the state-of-the-art Covariance Matrix Adaptation Evolution Strategy (CMA-ES) (Hansen 2006), which only uses intermediate recombination in its optimization, calculating the centre of gravity (weighted average) of the best current candidate solutions.
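The contrast between the two operator families can be illustrated with a minimal sketch (the function names and parent vectors are our own, not from the paper): intermediate recombination averages the parents coordinate-wise, while discrete recombination copies each coordinate from one randomly chosen parent.

```python
import numpy as np

rng = np.random.default_rng(0)

def intermediate_recombination(parents, weights):
    """Weighted average (centre of gravity) of the parent vectors,
    as used by CMA-ES to compute the new distribution mean."""
    return np.average(parents, axis=0, weights=weights)

def discrete_recombination(parents):
    """For each coordinate, copy the value from one randomly chosen parent."""
    n_parents, n_vars = parents.shape
    choice = rng.integers(n_parents, size=n_vars)
    return parents[choice, np.arange(n_vars)]

# Two illustrative parents in a 3-variable search space.
parents = np.array([[0.0, 0.0, 0.0],
                    [1.0, 1.0, 1.0]])
mean = intermediate_recombination(parents, weights=[0.7, 0.3])   # [0.3, 0.3, 0.3]
child = discrete_recombination(parents)                          # each entry is 0.0 or 1.0
```

Note how discrete recombination keeps every coordinate of the offspring exactly equal to some parent's value; this is what makes it possible to transplant an intact, optimized substructure, but also what makes random coordinate choices disruptive.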
Copyright © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

When investigating infinite-valued SAT, i.e. satisfiability in infinite-valued logics, which can be modelled as a continuous optimization problem over the domain [0, 1]^n, with n the number of variables, we applied the standard CMA-ES as a solver and significantly improved upon the state of the art (Schockaert, Janssen, and Vermeir 2012) (our results are to be published). Still, on a number of instances the algorithm regularly fails to converge to the global optimum. We hypothesise – and initial experiments confirm this – that such infinite-valued SAT problems have an inherent structure that could be exploited to improve the likelihood of convergence to the global optimum. We aim to incorporate a more informed discrete recombination operator into CMA-ES by explicitly detecting correlations between variables and creating clusters of variables that can be exchanged between candidate solutions, improving the algorithm's ability to escape local optima. In the next sections, we explain how this structure can be derived from the covariance matrix used in CMA-ES and how it is used in recombination.

Clustering variables

The CMA-ES algorithm relies on the adaptation of the covariance matrix of a multivariate normal search distribution, from which new individuals are sampled. The covariance matrix is adapted to fit the search distribution to the contour lines of the function to be minimized. This covariance matrix thus captures an estimate of the correlations between variables, which can be used to derive a clustering of variables for discrete recombination. The first step towards this clustering is building a weighted graph that represents the structure of the problem. For that, we use the eigendecomposition of the covariance matrix, which is already calculated within CMA-ES.
The correlation between two variables is determined by looking at the magnitudes of the corresponding components in each eigenvector. For each eigenvector, we take the product of the magnitudes of the two components in question. These products are then summed, each product weighted by the eigenvalue of its eigenvector. As a result, variables that have a relatively large magnitude in the same important eigenvectors are assigned a larger weight in the structure graph. Pseudocode is shown in Algorithm 1. The weights need to be normalized so that, if represented as a matrix, the elements on the diagonal equal 1, i.e. full auto-correlation for each variable. Normalization is performed as follows:

    weight(x, y) = weight(x, y) / (√weight(x, x) · √weight(y, y))    (1)

Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence
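The weight computation and the normalization of Eq. (1) can be sketched as follows (an illustrative NumPy version under our own reading of the text, with a made-up covariance matrix; the paper's Algorithm 1 may differ in details):

```python
import numpy as np

def structure_weights(C):
    """Structure-graph weights derived from covariance matrix C:
    weight[x, y] = sum_k eigval_k * |v_k[x]| * |v_k[y]|,
    then normalized so the diagonal equals 1 (Eq. 1)."""
    eigvals, eigvecs = np.linalg.eigh(C)   # eigvecs[:, k] is eigenvector k
    mags = np.abs(eigvecs)                 # component magnitudes
    # Scale column k by its eigenvalue, then sum the weighted products.
    W = (mags * eigvals) @ mags.T
    # Normalize: weight(x, y) / (sqrt(weight(x, x)) * sqrt(weight(y, y))).
    d = np.sqrt(np.diag(W))
    return W / np.outer(d, d)

# Hypothetical 3-variable covariance matrix (symmetric positive definite).
C = np.array([[2.0, 0.8, 0.0],
              [0.8, 1.0, 0.1],
              [0.0, 0.1, 1.5]])
W = structure_weights(C)
```

After normalization the diagonal is exactly 1, and by the Cauchy–Schwarz inequality every off-diagonal weight lies in [0, 1], so the matrix can be read directly as edge weights of the structure graph.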
Similar resources
Injecting External Solutions Into CMA-ES
This report considers how to inject external candidate solutions into the CMA-ES algorithm. The injected solutions might stem from a gradient or a Newton step, a surrogate model optimizer or any other oracle or search mechanism. They can also be the result of a repair mechanism, for example to render infeasible solutions feasible. Only small modifications to the CMA-ES are necessary to turn inj...
Investigating the Local-Meta-Model CMA-ES for Large Population Sizes
For many real-life engineering optimization problems, the cost of one objective function evaluation can take several minutes or hours. In this context, a popular approach to reduce the number of function evaluations consists in building a (meta-)model of the function to be optimized using the points explored during the optimization process and replacing some (true) function evaluations by the f...
Blind adapted, pre-whitened constant modulus algorithm
We examine the use of a blind adaptive “pre-whitening” filter to precede an equalizer adapted by the constant modulus algorithm (CMA). The idea is based on results presented in which the use of a (fixed, or non-adaptive) pre-whitening filter provides an isometry (i.e. geometry preserving transformation) between the combined channel-equalizer (or global space) and the equalizer tap space. As muc...
An Efficient Improvement of CMA-ES Algorithm for the Network Security Situation Prediction
Abstract: An improved covariance matrix adaptation evolution strategy algorithm (CMA-ES) is proposed and it is used to train the forecasting model of the network security situation in this paper. A new recombination strategy which adds a heuristic component is developed in the improved CMA-ES algorithm, and the search speed is accelerated. The experimental results show that, compare with origin...
Mirrored Sampling and Sequential Selection for Evolution Strategies
This paper reveals the surprising result that a single-parent non-elitist evolution strategy (ES) can be locally faster than the (1+1)-ES. The result is brought by mirrored sampling and sequential selection. With mirrored sampling, two offspring are generated symmetrically or mirrored with respect to their parent. In sequential selection, the offspring are evaluated sequentially and the iterati...
Journal:
Volume / Issue:
Pages: -
Publication year: 2012